
[tune](deps): Bump pytorch-lightning from 1.0.3 to 1.2.6 in /python/requirements #18

Conversation

@dependabot dependabot bot commented on behalf of github Mar 31, 2021

Bumps pytorch-lightning from 1.0.3 to 1.2.6.
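
For reference, in a pinned requirements file this bump amounts to a one-line change along these lines (assuming an exact `==` pin; the actual /python/requirements file may use a different specifier):

```diff
-pytorch-lightning==1.0.3
+pytorch-lightning==1.2.6
```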

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.2.6] - 2021-03-30

Changed

  • Changed the behavior of on_epoch_start to run at the beginning of validation & test epoch (#6498)
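
For context on the on_epoch_start change, a minimal sketch of a LightningModule hook that, as of this release, also runs at the start of validation and test epochs (the module below is illustrative, not part of this PR):

```python
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def on_epoch_start(self):
        # In pytorch-lightning 1.2.6 this hook fires at the beginning of
        # training, validation, and test epochs (#6498), so code here should
        # not assume it only runs before a training epoch.
        print(f"starting epoch {self.current_epoch}")
```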

Removed

  • Removed legacy code to include step dictionary returns in callback_metrics. Use self.log_dict instead. (#6682)
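
The suggested migration is to log metrics explicitly; a minimal sketch of the self.log_dict replacement pattern (the model internals and metric names below are illustrative):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self, num_features: int = 32, num_classes: int = 3):
        super().__init__()
        self.layer = torch.nn.Linear(num_features, num_classes)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.cross_entropy(logits, y)
        acc = (logits.argmax(dim=-1) == y).float().mean()
        # Legacy step-dictionary returns such as {"loss": loss, "log": {...}}
        # no longer populate callback_metrics; log explicitly instead.
        self.log_dict({"train_loss": loss, "train_acc": acc}, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```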

Fixed

  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)
  • Fixed error on TPUs when there was no ModelCheckpoint (#6654)
  • Fixed trainer.test freeze on TPUs (#6654)
  • Fixed a bug where gradients were disabled after calling Trainer.predict (#6657)
  • Fixed bug where no TPUs were detected in a TPU pod env (#6719)

Contributors

@awaelchli, @carmocca, @ethanwharris, @kaushikb11, @rohitgr7, @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Weekly patch release - torchmetrics compatibility

[1.2.5] - 2021-03-23

Changed

  • Added Autocast in validation, test and predict modes for Native AMP (#6565)
  • Update Gradient Clipping for the TPU Accelerator (#6576)
  • Refactored setup to be typing-friendly (#6590)

Fixed

  • Fixed a bug where all_gather would not work correctly with tpu_cores=8 (#6587)
  • Fixed comparing required versions (#6434)
  • Fixed duplicate logs appearing in console when using the python logging module (#6275)

Contributors

@awaelchli, @Borda, @ethanwharris, @justusschock, @kaushikb11

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.2.4] - 2021-03-16

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.2.6] - 2021-03-30

Changed

  • Changed the behavior of on_epoch_start to run at the beginning of validation & test epoch (#6498)

Removed

  • Removed legacy code to include step dictionary returns in callback_metrics. Use self.log_dict instead. (#6682)

Fixed

  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)
  • Fixed error on TPUs when there was no ModelCheckpoint (#6654)
  • Fixed trainer.test freeze on TPUs (#6654)
  • Fixed a bug where gradients were disabled after calling Trainer.predict (#6657)
  • Fixed bug where no TPUs were detected in a TPU pod env (#6719)

[1.2.5] - 2021-03-23

Changed

  • Update Gradient Clipping for the TPU Accelerator (#6576)
  • Refactored setup to be typing-friendly (#6590)

Fixed

  • Fixed a bug where all_gather would not work correctly with tpu_cores=8 (#6587)

  • Fixed comparing required versions (#6434)

  • Fixed duplicate logs appearing in console when using the python logging module (#6275)

  • Added Autocast in validation, test and predict modes for Native AMP (#6565)

  • Fixed a bug with omegaconf and xm.save (#6741)

[1.2.4] - 2021-03-16

Changed

  • Changed the default of find_unused_parameters back to True in DDP and DDP Spawn (#6438)
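
With the default back to True, DDP pays the unused-parameter search cost again, so users who want the faster behaviour have to request it explicitly. A minimal sketch, assuming the 1.2.x DDPPlugin API (the GPU count and accelerator flag are illustrative):

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin

# find_unused_parameters defaults to True again as of 1.2.4 (#6438).
# If every parameter receives gradients on every step, passing False
# restores the cheaper DDP code path.
trainer = pl.Trainer(
    gpus=2,
    accelerator="ddp",
    plugins=[DDPPlugin(find_unused_parameters=False)],
)
```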

Fixed

  • Expose DeepSpeed loss parameters to allow users to fix loss instability (#6115)
  • Fixed DP reduction with collection (#6324)
  • Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size (#4688)
  • Fixed broadcast to use PyTorch broadcast_object_list and add reduce_decision (#6410)
  • Fixed logger creating directory structure too early in DDP (#6380)
  • Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough (#6460)

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) label Mar 31, 2021

dependabot bot commented on behalf of github Apr 8, 2021

Superseded by #19.

@dependabot dependabot bot closed this Apr 8, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/pytorch-lightning-1.2.6 branch April 8, 2021 03:04